REACT3D: Recovering Articulations for Interactive Physical 3D Scenes

Huang, Zhao, Sun, Boyang, Delitzas, Alexandros, Chen, Jiaqi, Pollefeys, Marc

arXiv.org Artificial Intelligence

Interactive 3D scenes are increasingly vital for embodied intelligence, yet existing datasets remain limited due to the labor-intensive process of annotating part segmentation, kinematic types, and motion trajectories. We present REACT3D, a scalable zero-shot framework that converts static 3D scenes into simulation-ready interactive replicas with consistent geometry, enabling direct use in diverse downstream tasks. Our contributions include: (i) openable-object detection and segmentation to extract candidate movable parts from static scenes, (ii) articulation estimation that infers joint types and motion parameters, (iii) hidden-geometry completion followed by interactive object assembly, and (iv) interactive scene integration in widely supported formats to ensure compatibility with standard simulation platforms. We achieve state-of-the-art performance on detection/segmentation and articulation metrics across diverse indoor scenes, demonstrating the effectiveness of our framework and providing a practical foundation for scalable interactive scene generation, thereby lowering the barrier to large-scale research on articulated scene understanding. Our project page is https://react3d.github.io/


Vi-TacMan: Articulated Object Manipulation via Vision and Touch

Cui, Leiyao, Zhao, Zihang, Xie, Sirui, Zhang, Wenhuan, Han, Zhi, Zhu, Yixin

arXiv.org Artificial Intelligence

Autonomous manipulation of articulated objects remains a fundamental challenge for robots in human environments. Vision-based methods can infer hidden kinematics but can yield imprecise estimates on unfamiliar objects. Tactile approaches achieve robust control through contact feedback but require accurate initialization. This suggests a natural synergy: vision for global guidance, touch for local precision. Yet no framework systematically exploits this complementarity for generalized articulated manipulation. Here we present Vi-TacMan, which uses vision to propose grasps and coarse directions that seed a tactile controller for precise execution. By incorporating surface normals as geometric priors and modeling directions via von Mises-Fisher distributions, our approach achieves significant gains over baselines (all p<0.0001). Critically, manipulation succeeds without explicit kinematic models -- the tactile controller refines coarse visual estimates through real-time contact regulation. Tests on more than 50,000 simulated and diverse real-world objects confirm robust cross-category generalization. This work establishes that coarse visual cues suffice for reliable manipulation when coupled with tactile feedback, offering a scalable paradigm for autonomous systems in unstructured environments.
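The abstract above models coarse motion directions with von Mises-Fisher distributions, the standard analogue of a Gaussian on the unit sphere. As a minimal illustration (not the authors' implementation), the 3D vMF log-density has a closed form, concentrating around a mean direction `mu` with concentration `kappa`:

```python
import numpy as np

def vmf_logpdf(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere S^2.

    f(x; mu, kappa) = C(kappa) * exp(kappa * mu . x), with normalizer
    C(kappa) = kappa / (2*pi*(e^kappa - e^-kappa)) in the 3D case.
    """
    x = np.asarray(x, float) / np.linalg.norm(x)
    mu = np.asarray(mu, float) / np.linalg.norm(mu)
    log_c = np.log(kappa) - np.log(2 * np.pi) - np.log(np.exp(kappa) - np.exp(-kappa))
    return log_c + kappa * np.dot(mu, x)
```

As `kappa -> 0` the density approaches the uniform density 1/(4*pi) on the sphere, so a learned `kappa` doubles as a confidence estimate on the proposed direction.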


ScrewSplat: An End-to-End Method for Articulated Object Recognition

Kim, Seungyeon, Ha, Junsu, Kim, Young Hun, Lee, Yonghyeon, Park, Frank C.

arXiv.org Artificial Intelligence

Figure 1: Articulated object recognition by splatting screw axes and Gaussians.

Articulated objects with movable parts - such as doors, laptops, and drawers - are common in everyday environments, and manipulating them requires understanding both their 3D geometry and underlying kinematic structure (e.g., joint types and axes). While prior work has addressed this using large-scale datasets of 3D objects with annotated joint axes in supervised settings [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], such methods struggle to generalize to unseen categories - a natural limitation of supervised learning. In this work, we tackle a more challenging yet practical scenario: inferring kinematic structure directly from multi-view RGB images under varying object configurations, without relying on category-specific supervision (see the left of Figure 1). Spurred in part by the success of neural rendering-based 3D reconstruction methods that require no supervised training [12, 13, 14, 15], recent works have adapted these frameworks for articulated object recognition [16, 17, 18, 19, 20], achieving promising results using raw RGB observations. However, a key drawback of these methods lies in their reliance on strong assumptions, such as a known number of articulated components or predefined joint types.
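A screw axis, as used above, unifies revolute and prismatic joints: a rotation by `theta` about an axis combined with a slide `d` along it (with `theta = 0` giving a pure prismatic motion). A minimal sketch of applying such a motion to points, via Rodrigues' rotation formula (illustrative only, not ScrewSplat's code):

```python
import numpy as np

def screw_transform(points, axis_point, axis_dir, theta, d):
    """Apply a screw motion: rotate points by theta about the axis through
    axis_point with direction axis_dir, then translate by d along the axis."""
    k = np.asarray(axis_dir, float)
    k = k / np.linalg.norm(k)
    p = np.atleast_2d(points) - axis_point
    # Rodrigues' formula: v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
    rot = (p * np.cos(theta)
           + np.cross(k, p) * np.sin(theta)
           + np.outer(p @ k, k) * (1.0 - np.cos(theta)))
    return rot + axis_point + d * k
```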


Part$^{2}$GS: Part-aware Modeling of Articulated Objects using 3D Gaussian Splatting

Yu, Tianjiao, Shah, Vedant, Wahed, Muntasir, Shen, Ying, Nguyen, Kiet A., Lourentzou, Ismini

arXiv.org Artificial Intelligence

Articulated objects are common in the real world, yet modeling their structure and motion remains a challenging task for 3D reconstruction methods. In this work, we introduce Part$^{2}$GS, a novel framework for modeling articulated digital twins of multi-part objects with high-fidelity geometry and physically consistent articulation. Part$^{2}$GS leverages a part-aware 3D Gaussian representation that encodes articulated components with learnable attributes, enabling structured, disentangled transformations that preserve high-fidelity geometry. To ensure physically consistent motion, we propose a motion-aware canonical representation guided by physics-based constraints, including contact enforcement, velocity consistency, and vector-field alignment. Furthermore, we introduce a field of repel points to prevent part collisions and maintain stable articulation paths, significantly improving motion coherence over baselines. Extensive evaluations on both synthetic and real-world datasets show that Part$^{2}$GS consistently outperforms state-of-the-art methods by up to 10$\times$ in Chamfer Distance for movable parts.
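The Chamfer Distance used for evaluation above measures how well two point clouds overlap by averaging nearest-neighbor distances in both directions. A brute-force sketch of the unsquared symmetric variant (papers differ on squaring and averaging conventions, so treat the exact form as an assumption):

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets A (N,3) and B (M,3):
    mean nearest-neighbor distance from A to B plus from B to A."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The O(NM) pairwise matrix is fine for evaluation-sized clouds; large clouds would use a KD-tree instead.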


ArtGS: Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting

Liu, Yu, Jia, Baoxiong, Lu, Ruijie, Ni, Junfeng, Zhu, Song-Chun, Huang, Siyuan

arXiv.org Artificial Intelligence

Building interactable replicas of articulated objects is a key challenge in computer vision. Existing methods often fail to effectively integrate information across different object states, limiting the accuracy of part-mesh reconstruction and part dynamics modeling, particularly for complex multi-part articulated objects. We introduce ArtGS, a novel approach that leverages 3D Gaussians as a flexible and efficient representation to address these issues. Our method incorporates canonical Gaussians with coarse-to-fine initialization and updates for aligning articulated part information across different object states, and employs a skinning-inspired part dynamics modeling module to improve both part-mesh reconstruction and articulation learning. Extensive experiments on both synthetic and real-world datasets, including a new benchmark for complex multi-part objects, demonstrate that ArtGS achieves state-of-the-art performance in joint parameter estimation and part mesh reconstruction. Our approach significantly improves reconstruction quality and efficiency, especially for multi-part articulated objects. Additionally, we provide comprehensive analyses of our design choices, validating the effectiveness of each component to highlight potential areas for future improvement. Our work is made publicly available at: https://articulate-gs.github.io. Articulated objects, central to everyday human-environment interactions, have become a key focus in computer vision research (Yang et al., 2023a; Weng et al., 2024; Luo et al., 2025; Liu et al., 2024; Deng et al., 2024). As we advance towards more sophisticated robotic systems and immersive virtual environments, there is a growing need for improved and efficient modeling techniques for the reconstruction of articulated objects. 
The problem of reconstructing articulated objects has been extensively studied (Liu et al., 2023a;b; Weng et al., 2024; Deng et al., 2024; Yang et al., 2023a), with a key challenge being the learning of object geometry when only partial views of the object are available at any given state.


Inter3D: A Benchmark and Strong Baseline for Human-Interactive 3D Object Reconstruction

Chen, Gan, He, Ying, Yu, Mulin, Yu, F. Richard, Xu, Gang, Ma, Fei, Li, Ming, Zhou, Guang

arXiv.org Artificial Intelligence

Recent advancements in implicit 3D reconstruction methods, e.g., neural radiance fields and Gaussian splatting, have primarily focused on novel view synthesis of static or dynamic objects with continuous motion states. However, these approaches struggle to efficiently model a human-interactive object with n movable parts, requiring 2^n separate models to represent all discrete states. To overcome this limitation, we propose Inter3D, a new benchmark and approach for novel state synthesis of human-interactive objects. We introduce a self-collected dataset featuring commonly encountered interactive objects and a new evaluation pipeline, where only individual part states are observed during training, while part combination states remain unseen. We also propose a strong baseline approach that leverages Space Discrepancy Tensors to efficiently model all states of an object. To alleviate the impractical constraints on camera trajectories across training states, we propose a Mutual State Regularization mechanism to enhance the spatial density consistency of movable parts. In addition, we explore two occupancy grid sampling strategies to facilitate training efficiency. We conduct extensive experiments on the proposed benchmark, showcasing the challenges of the task and the superiority of our approach.
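The combinatorics motivating Inter3D are easy to make concrete: n binary (open/closed) parts yield 2^n discrete object states, while the training protocol described above observes only individual part states. A small sketch (the `observed_states` definition, all-closed plus each single part open, is my reading of the protocol, not the authors' exact split):

```python
from itertools import product

def all_states(n):
    """Enumerate the 2^n open/closed combinations of n movable parts."""
    return list(product([0, 1], repeat=n))

def observed_states(n):
    """Hypothetical training split: all parts closed, or exactly one part
    open -- n + 1 of the 2^n combinations; the rest are unseen at test time."""
    return [s for s in all_states(n) if sum(s) <= 1]
```

For a cabinet with 4 doors/drawers this leaves 16 - 5 = 11 combination states that a per-state model would never see, which is the gap Inter3D's benchmark targets.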


Holistic Understanding of 3D Scenes as Universal Scene Description

Halacheva, Anna-Maria, Miao, Yang, Zaech, Jan-Nico, Wang, Xi, Van Gool, Luc, Paudel, Danda Pani

arXiv.org Artificial Intelligence

3D scene understanding is a long-standing challenge in computer vision and a key component in enabling mixed reality, wearable computing, and embodied AI. Providing a solution to these applications requires a multifaceted approach that covers scene-centric, object-centric, as well as interaction-centric capabilities. While there exist numerous datasets approaching the former two problems, the task of understanding interactable and articulated objects is underrepresented and only partly covered by current works. In this work, we address this shortcoming and introduce (1) an expertly curated dataset in the Universal Scene Description (USD) format, featuring high-quality manual annotations, including instance segmentation and articulation, on 280 indoor scenes; (2) a learning-based model together with a novel baseline capable of predicting part segmentation along with a full specification of motion attributes, including motion type, articulated and interactable parts, and motion parameters; (3) a benchmark serving to compare upcoming methods for the task at hand. Overall, our dataset provides 8 types of annotations - object and part segmentations, motion types, movable and interactable parts, motion parameters, connectivity, and object mass annotations. With its broad and high-quality annotations, the data provides the basis for holistic 3D scene understanding models. All data is provided in the USD format, allowing interoperability and easy integration with downstream tasks. We provide open access to our dataset, benchmark, and method's source code.


Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects

Weng, Yijia, Wen, Bowen, Tremblay, Jonathan, Blukis, Valts, Fox, Dieter, Guibas, Leonidas, Birchfield, Stan

arXiv.org Artificial Intelligence

We address the problem of building digital twins of unknown articulated objects from two RGBD scans of the object at different articulation states. We decompose the problem into two stages, each addressing distinct aspects. Our method first reconstructs object-level shape at each state, then recovers the underlying articulation model including part segmentation and joint articulations that associate the two states. By explicitly modeling point-level correspondences and exploiting cues from images, 3D reconstructions, and kinematics, our method yields more accurate and stable results compared to prior work. It also handles more than one movable part and does not rely on any object shape or structure priors. Project page: https://github.com/NVlabs/DigitalTwinArt
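The two-state setting above has a clean geometric core: given the relative rigid motion of a part between the two scans, a revolute joint's angle, axis direction, and a point on the axis can be read off the rotation matrix. A minimal sketch of that extraction (illustrative; the paper's full pipeline also handles segmentation and prismatic joints):

```python
import numpy as np

def revolute_from_relative(R, t):
    """Recover a revolute joint from the relative rigid motion x -> R x + t
    between two articulation states: returns (axis direction, angle, point on
    axis). A point on the axis is a fixed point of the motion: (I - R) p = t."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # Axis from the skew-symmetric part of R (valid for 0 < angle < pi)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis = axis / (2.0 * np.sin(angle))
    # (I - R) is singular along the axis; least squares picks one axis point
    point = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)[0]
    return axis, angle, point
```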


Part-Guided 3D RL for Sim2Real Articulated Object Manipulation

Xie, Pengwei, Chen, Rui, Chen, Siang, Qin, Yuzhe, Xiang, Fanbo, Sun, Tianyu, Xu, Jing, Wang, Guijin, Su, Hao

arXiv.org Artificial Intelligence

Manipulating unseen articulated objects through visual feedback is a critical but challenging task for real robots. Existing learning-based solutions mainly focus on visual affordance learning or other pre-trained visual models to guide manipulation policies, which face challenges for novel instances in real-world scenarios. In this paper, we propose a novel part-guided 3D RL framework, which can learn to manipulate articulated objects without demonstrations. We combine the strengths of 2D segmentation and 3D RL to improve the efficiency of RL policy training. To improve the stability of the policy on real robots, we design a Frame-consistent Uncertainty-aware Sampling (FUS) strategy to get a condensed and hierarchical 3D representation. In addition, a single versatile RL policy can be trained on multiple articulated object manipulation tasks simultaneously in simulation and shows great generalizability to novel categories and instances. Experimental results demonstrate the effectiveness of our framework in both simulation and real-world settings. Our code is available at https://github.com/THU-VCLab/Part-Guided-3D-RL-for-Sim2Real-Articulated-Object-Manipulation.


SM$^3$: Self-Supervised Multi-task Modeling with Multi-view 2D Images for Articulated Objects

Wang, Haowen, Zhao, Zhen, Jin, Zhao, Che, Zhengping, Qiao, Liang, Huang, Yakun, Fan, Zhipeng, Qiao, Xiuquan, Tang, Jian

arXiv.org Artificial Intelligence

Reconstructing real-world objects and estimating their movable joint structures are pivotal technologies within the field of robotics. Previous research has predominantly focused on supervised approaches, relying on extensively annotated datasets to model articulated objects within limited categories. However, this approach falls short of effectively addressing the diversity present in the real world. To tackle this issue, we propose a self-supervised interaction perception method, referred to as SM$^3$, which leverages multi-view RGB images captured before and after interaction to model articulated objects, identify the movable parts, and infer the parameters of their rotating joints. By constructing 3D geometries and textures from the captured 2D images, SM$^3$ achieves integrated optimization of movable part and joint parameters during the reconstruction process, obviating the need for annotations. Furthermore, we introduce the MMArt dataset, an extension of PartNet-Mobility, encompassing multi-view and multi-modal data of articulated objects spanning diverse categories. Evaluations demonstrate that SM$^3$ surpasses existing benchmarks across various categories and objects, while its adaptability in real-world scenarios has been thoroughly validated.